Title:
Google Brain Tokyo Talk: Exploring Deep Learning for Classical Japanese Literature, Machine Creativity, and Recurrent World Models!
Abstract:
Deep generative models are proving to be powerful methods for generating realistic media, such as images, speech, and even video. However, they are often seen as black boxes with little interpretability. My recent research interest has been to investigate the abstract representations learned by deep generative models. Our group has shown that understanding the latent space of these models not only makes deep neural networks more interpretable, but also opens up a wide range of applications.
In this talk, I will highlight some recent applications of generative models to the domain of classical Japanese literature. I will also talk about potential use cases of machine learning algorithms in creative applications, and discuss whether such algorithms are merely tools for artists, or whether there is something inherently creative about the algorithms themselves. Finally, I will discuss some interesting applications of generative models for generating reinforcement learning game environments.